
    Production of single-charm hadrons by quark combination mechanism in p-Pb collisions at \sqrt{s_{NN}}=5.02 TeV

    If a QGP-like medium is created in p-Pb collisions at extremely high collision energies, charm quarks moving in the medium can hadronize by capturing co-moving light quarks or antiquarks to form charm hadrons. Using light-quark p_T spectra extracted from experimental data on light hadrons and a charm-quark p_T spectrum consistent with perturbative QCD calculations, the central-rapidity data on p_T spectra and spectrum ratios for D mesons in the low-p_T range (p_T \lesssim 7 GeV) in minimum-bias p-Pb collisions at \sqrt{s_{NN}}=5.02 TeV are well described by the quark combination mechanism in the equal-velocity combination approximation. The \Lambda_c^+/D^0 ratio in the quark combination mechanism exhibits the typical increase-peak-decrease behavior as a function of p_T, and the shape of the ratio for p_T \gtrsim 3 GeV agrees with the preliminary data of the ALICE collaboration in the central rapidity region -0.96<y<0.04 and those of the LHCb collaboration in the forward rapidity region 1.5<y<4.0. The global production of single-charm baryons is quantified using the preliminary data, and the possible enhancement (relative to light-flavor baryons) is discussed. The p_T spectra of \Xi_c^0 and \Omega_c^0 in minimum-bias events and those of single-charm hadrons in high-multiplicity event classes are predicted, which serves as a further test of a possible change in the hadronization characteristics of low-p_T charm quarks in the small system created in p-Pb collisions at LHC energies. Comment: 13 pages, 8 figures
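The core of the equal-velocity combination approximation is that the two constituents of a meson move with the same velocity, so they share the meson momentum in proportion to their masses. The following is a minimal illustrative sketch, not the paper's fitted spectra: the constituent masses and the two toy quark spectra are assumptions chosen only to show how a D-meson p_T spectrum is built as a product of quark spectra evaluated at mass-weighted momentum fractions.

```python
import math

# Assumed constituent masses in GeV (illustrative, not the paper's values)
M_C, M_Q = 1.5, 0.3

def x_fractions(m1=M_C, m2=M_Q):
    """Momentum fractions carried by each quark when both move with the
    meson's velocity: p_i = (m_i / (m1 + m2)) * p_meson."""
    return m1 / (m1 + m2), m2 / (m1 + m2)

def f_charm(pt):
    # toy power-law charm-quark p_T spectrum (stand-in for the pQCD input)
    return (1.0 + pt / 2.0) ** -4

def f_light(pt):
    # toy exponential light-quark p_T spectrum (stand-in for the data fit)
    return math.exp(-pt / 0.5)

def f_D(pt, kappa=1.0):
    """D-meson spectrum as the product of the charm and light antiquark
    spectra at their equal-velocity momentum fractions; kappa is a
    hypothetical overall combination probability."""
    x_c, x_q = x_fractions()
    return kappa * f_charm(x_c * pt) * f_light(x_q * pt)
```

Because m_c >> m_q, the charm quark carries most of the meson momentum (x_c = 1.5/1.8 ≈ 0.83 here), which is what lets the light-quark spectrum shape imprint itself on the D-meson ratios at low p_T.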

    A method for incremental discovery of financial event types based on anomaly detection

    Event datasets in the financial domain are often constructed for specific application scenarios, and their event types are weakly reusable because of those scenario constraints; at the same time, the massive and diverse stream of new financial data cannot be confined to event types defined for particular scenarios. Such a small set of event types does not meet the needs of more complex tasks such as predicting major financial events and analyzing their ripple effects. In this paper, a three-stage approach is proposed to accomplish incremental discovery of event types. Given an existing annotated financial event dataset mixing original and unknown event types, a semi-supervised deep clustering model with anomaly detection is first applied to classify the data into normal and abnormal events, where abnormal events are those that do not belong to known types; next, normal events are tagged with the appropriate event types and abnormal events are reasonably clustered; finally, a cluster keyword extraction method recommends type names for the new event clusters, thereby incrementally discovering new event types. The proposed method is effective in incrementally discovering new event types on real data sets. Comment: 11 pages, 4 figures
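The three stages can be sketched end to end on toy data. This is only an illustration of the pipeline's shape: the paper's stage 1 uses a semi-supervised deep clustering model with anomaly detection, which is replaced here by a simple cosine-similarity threshold to known-type centroids, and all event texts and type names below are hypothetical.

```python
from collections import Counter
import math

def vec(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def discover(events, known_types, threshold=0.4):
    # Centroid of each known type from its annotated samples
    centroids = {t: vec(" ".join(samples)) for t, samples in known_types.items()}
    normal, abnormal = [], []
    # Stage 1: events far from every known-type centroid are "abnormal"
    for e in events:
        sims = {t: cosine(vec(e), c) for t, c in centroids.items()}
        best = max(sims, key=sims.get)
        if sims[best] >= threshold:
            normal.append((e, best))      # stage 2a: tag with known type
        else:
            abnormal.append(e)
    # Stage 2b: cluster abnormal events (one cluster here for simplicity)
    cluster = list(abnormal)
    # Stage 3: recommend a type name from the cluster's top keywords
    words = Counter(" ".join(cluster).lower().split())
    keywords = [w for w, _ in words.most_common(2)]
    return normal, cluster, keywords
```

A real implementation would cluster the abnormal events properly (e.g. k-means over learned embeddings) and filter stopwords before keyword extraction; the sketch only shows how the three stages hand data to each other.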

    Feasibility Study of Secondary Polymer Flooding in Henan Oilfield, China

    After polymer flooding, relay technology is needed to maintain oil yield. In this paper, laboratory experiments were conducted to investigate the feasibility of secondary polymer flooding, in which additional polymer with a higher concentration and relative molecular mass is injected; the enhanced-recovery range and the optimum concentration must be determined. With microscopic visible glass physical models, the further start-up of oil drops by the secondary polymer can be observed distinctly. Meanwhile, macroscopic heterogeneous core tests were carried out with permeability ratios of 2, 5 and 8, and the polymer concentration that is both effective and economical for flooding was optimized. It is shown that a further enhanced recovery of 3%~8% and a 20% decrease in water cut can be obtained, and the water-injection profile can be improved to some extent after secondary polymer flooding. It is therefore demonstrated that secondary polymer injection after primary polymer flooding can indeed further improve recovery and that the technology of secondary flooding is feasible; a laboratory optimum concentration of 2,200 mg/L was determined. On the basis of the laboratory results, a field trial with the above optimum parameters was implemented from 2007 to 2008. By December 2008, water cut had decreased from 92% to 83%, and cumulative incremental crude oil reached 5.71×10^4 t. The success of secondary polymer flooding provides a reference for the development of oil fields after primary polymer flooding in China and other regions worldwide. Key words: secondary polymer flooding; feasibility study; microscopic mechanism; polymer concentration optimization; after polymer flooding

    Production properties of deuterons, helions and tritons via an analytical nucleon coalescence method in Pb-Pb collisions at \sqrt{s_{NN}}=2.76 TeV

    We improve a nucleon coalescence model to include the coordinate-momentum correlation in nucleon joint distributions, and apply it to Pb-Pb collisions at \sqrt{s_{NN}}=2.76 TeV to study production properties of deuterons (d), helions (^3He) and tritons (t). We give formulas for the coalescence factors B_2 and B_3, and naturally explain their behaviors as functions of the collision centrality and the transverse momentum per nucleon p_T/A. We reproduce the transverse momentum spectra, averaged transverse momenta and yield rapidity densities of d, ^3He and t, and find that the system effective radius obtained in the coalescence production of light nuclei behaves similarly to the Hanbury Brown-Twiss interferometry radius. We give in particular expressions for the yield ratios d/p, ^3He/d, t/p, ^3He/p, d/p^2, ^3He/p^3 and t/^3He, and argue that their nontrivial behaviors can be used to distinguish production mechanisms of light nuclei. Comment: 12 pages, 8 figures, 1 table
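The coalescence factor B_A is conventionally defined as the invariant yield of a nucleus with mass number A divided by the A-th power of the nucleon invariant yield evaluated at the same momentum per nucleon, p_T/A. A minimal numerical sketch of that definition follows, using an assumed toy exponential proton spectrum (not the paper's model spectra); it shows that if the nucleus spectrum is exactly the nucleon spectrum to the A-th power times a constant, B_A comes out momentum-independent.

```python
import math

def inv_proton_spectrum(pt):
    # toy thermal-like invariant proton spectrum (illustrative only)
    return 10.0 * math.exp(-pt / 0.7)

def B_A(nucleus_spectrum, pt_nucleus, A):
    """Coalescence factor: nucleus invariant yield divided by the A-th
    power of the proton yield at the same momentum per nucleon p_T/A."""
    return nucleus_spectrum(pt_nucleus) / inv_proton_spectrum(pt_nucleus / A) ** A

def toy_deuteron_spectrum(pt):
    # assume a constant combination probability of 4e-4 (hypothetical)
    return 4e-4 * inv_proton_spectrum(pt / 2) ** 2
```

In a realistic calculation B_2 and B_3 acquire centrality and p_T/A dependence through the source size and the coordinate-momentum correlations the abstract describes; the flat result of the toy model is precisely the baseline those dependences are measured against.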

    Traditional Wooden Buildings in China

    Chinese ancient architecture, with its long history, unique systematic features, widespread employment and abundant heritage, is a valuable legacy of the whole world. Owing to the particularity of the materials and structures of Chinese ancient architecture, related research results are mostly published in Chinese, which limits international communication. Based on studies carried out at Nanjing Forestry University and many other universities and teams, this chapter introduces the development, structural evolution and preservation of traditional Chinese wooden structures; the review of research status focuses on the material properties, decay patterns, anti-seismic performance and corresponding conservation and reinforcement technologies of the main load-bearing members in traditional Chinese wooden structures.

    A General Framework for Accelerating Swarm Intelligence Algorithms on FPGAs, GPUs and Multi-core CPUs

    Swarm intelligence algorithms (SIAs) have demonstrated excellent performance when solving optimization problems, including many real-world problems. However, because of their expensive computational cost on some complex problems, SIAs need to be accelerated effectively for better performance. This paper presents FASI, a high-performance general framework for accelerating SIAs. Different from previous work, which accelerates SIAs only by enhancing parallelization, FASI considers both the memory architectures of hardware platforms and the dataflow of SIAs, and it reschedules the framework of SIAs as a converged dataflow to improve memory access efficiency. FASI achieves higher acceleration by matching the algorithm framework to the hardware architecture, and we design deeply optimized parallelization and convergence structures for FASI based on the characteristics of specific hardware platforms. We take the quantum-behaved particle swarm optimization algorithm (QPSO) as a case study to evaluate FASI. The results show that FASI improves the throughput of SIAs and provides better performance through optimized hardware implementations. In our experiments, FASI achieves a maximum throughput of 290.7 Mbit/s, which is higher than several existing systems, and FASI on FPGAs achieves a better speedup than on GPUs and multi-core CPUs: it is up to 123 times and no less than 1.45 times faster in terms of optimization time on a Xilinx Kintex UltraScale xcku040 compared to an Intel Core i7-6700 CPU and an NVIDIA GTX1080 GPU. Finally, we compare the differences of deploying FASI on the hardware platforms and provide guidelines for improving acceleration performance according to the hardware architectures.
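For reference, the algorithm FASI accelerates can be stated compactly. Below is a minimal sequential QPSO sketch in plain Python, the standard textbook form of the algorithm rather than anything from the FASI hardware implementation: each particle is drawn around a local attractor (a random mix of its personal best and the global best) with a step scaled by its distance to the mean best position and a contraction-expansion coefficient beta.

```python
import random
import math

def qpso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with quantum-behaved PSO."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]               # personal best positions
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters        # contraction-expansion: 1.0 -> 0.5
        # mean best position over all personal bests
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                # local attractor between personal and global best
                p_attr = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - rng.random()      # u in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = p_attr + step if rng.random() < 0.5 else p_attr - step
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

The per-particle, per-dimension inner loops are independent within an iteration, which is what makes QPSO a natural target for the FPGA/GPU parallelization and dataflow rescheduling the paper describes.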